Advances in language-based Artificial Intelligence (AI) technologies applied to building educational applications can present AI-for-social-good opportunities with a broader positive impact. Across many disciplines, enhancing the quality of mathematics education is crucial to building critical-thinking and problem-solving skills from a young age. Conversational AI systems have matured to a point where they could play a significant role in helping students learn fundamental math concepts. This work presents a task-oriented Spoken Dialogue System (SDS) built to support play-based learning of basic math concepts for early childhood education. The system has been evaluated via real-world deployments at schools while students practice early math concepts through multimodal interactions. We discuss our efforts to improve the SDS pipeline built for math learning, for which we explore utilizing MathBERT representations to potentially enhance the Natural Language Understanding (NLU) module. We perform an end-to-end evaluation using real-world deployment outputs from the Automatic Speech Recognition (ASR), Intent Recognition, and Dialogue Manager (DM) components to understand how error propagation affects overall performance in real-world scenarios.
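A minimal sketch of the kind of ASR → intent recognition → dialogue management pipeline described above; all component names and the rule-based logic are illustrative placeholders, not the authors' implementation.

```python
# Toy sketch of an SDS turn for a math-practice exercise. The intent recognizer
# and DM policy below are stand-ins, not the paper's MathBERT-based NLU or DM.
from dataclasses import dataclass

@dataclass
class NLUResult:
    intent: str
    confidence: float

def recognize_intent(transcript: str) -> NLUResult:
    """Toy intent recognizer standing in for the NLU module."""
    if any(tok.isdigit() for tok in transcript.split()):
        return NLUResult(intent="answer_number", confidence=0.9)
    return NLUResult(intent="out_of_scope", confidence=0.5)

def dialogue_manager(nlu: NLUResult, expected_answer: str, transcript: str) -> str:
    """Toy DM policy: accept the answer, correct it, or ask the child to repeat."""
    if nlu.intent != "answer_number" or nlu.confidence < 0.6:
        return "Sorry, can you say that again?"
    if expected_answer in transcript.split():
        return "Great job! That's right."
    return f"Not quite. Let's count together: the answer is {expected_answer}."

if __name__ == "__main__":
    asr_transcript = "i think it is 7"   # output of the ASR component
    result = recognize_intent(asr_transcript)
    print(dialogue_manager(result, expected_answer="7", transcript=asr_transcript))
```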
Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-note happiness defined as only the absence of "negative" emotions. We argue that we must carefully consider whether and how to respond to users' emotions.
Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
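A minimal sketch of this style of context-dependent minimal-pair evaluation, assuming a Hugging Face GPT-2 checkpoint and illustrative example sentences (not the paper's test suites or prompts):

```python
# Score an acceptable/unacceptable minimal pair with and without a prepended
# context, and check whether the model still prefers the acceptable sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text: str, context: str = "") -> float:
    """Total log-probability of `text`, optionally conditioned on a context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids if context else None
    ids = tokenizer(text, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, ids], dim=1) if ctx_ids is not None else ids
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_scores = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Only the test sentence's tokens contribute to the score, not the context.
    return token_scores[:, -ids.shape[1]:].sum().item()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
context = "The pictures of the lake are stunning. The doors to the garden are open."

for ctx in ("", context):
    prefers_good = sentence_logprob(good, ctx) > sentence_logprob(bad, ctx)
    print(f"context={'yes' if ctx else 'no'}: prefers acceptable -> {prefers_good}")
```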
This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval2022. This year we have reorganised and simplified the task in order to encourage a greater depth of inquiry. Similar to last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and to prioritise short-term memorability prediction by elevating the Memento10k dataset as the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.
The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018, and several different tasks and data sets have been used over this time. This has allowed us to compare the performance of many memorability prediction techniques on the same data, in a reproducible way, and to refine and improve those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community.
We analyze the problem of detecting tree rings in microscopy images of shrub cross sections. This can be regarded as a special case of the instance segmentation task with several particularities, such as the concentric circular ring shape of the objects and high precision requirements, due to which existing methods do not perform sufficiently well. We propose a new iterative method, which we term Iterative Next Boundary Detection (INBD). It intuitively models the natural growth direction, starting from the center of the shrub cross section and detecting the next ring boundary in each iteration step. In our experiments, INBD shows superior performance to generic instance segmentation methods and is the only one with a built-in notion of chronological order. Our dataset and source code are available at http://github.com/alexander-g/INBD.
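A structural sketch of the iterative next-boundary loop, with a toy fixed-spacing detector standing in for the learned network (this is not the released INBD code):

```python
# Start at the cross-section centre and detect one ring boundary per iteration,
# using the previously found boundary as the new starting contour. The toy
# detector below simply assumes a fixed ring spacing and stops at the image edge.
import numpy as np

def toy_detect_next_boundary(image: np.ndarray, prev_radius: float) -> float | None:
    """Stand-in detector: pretend every ring is 12 px further out, and stop
    once the next boundary would leave the image."""
    next_radius = prev_radius + 12.0
    half_size = min(image.shape[:2]) / 2
    return next_radius if next_radius < half_size else None

def iterative_ring_detection(image: np.ndarray) -> list[float]:
    radii, prev = [], 0.0            # start at the pith (centre of the section)
    while True:
        nxt = toy_detect_next_boundary(image, prev)
        if nxt is None:              # no further boundary: bark reached
            break
        radii.append(nxt)            # boundaries come out in chronological order
        prev = nxt
    return radii

print(iterative_ring_detection(np.zeros((256, 256))))   # -> [12.0, 24.0, ...]
```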
We present AI-SDC, an integrated suite of open source Python tools to facilitate Statistical Disclosure Control (SDC) of Machine Learning (ML) models trained on confidential data prior to public release. AI-SDC combines (i) a SafeModel package that extends commonly used ML models to provide ante-hoc SDC by assessing the disclosure vulnerability posed by the training regime; and (ii) an Attacks package that provides post-hoc SDC by rigorously assessing the empirical disclosure risk of a model through a variety of simulated attacks after training. The AI-SDC code and documentation are available under an MIT license at https://github.com/AI-SDC/AI-SDC.
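For illustration only, the sketch below shows the kind of simulated membership-inference check that a post-hoc attacks package automates; it is written with generic scikit-learn code and does not use or represent the AI-SDC API.

```python
# Basic membership-inference probe: does the target model's confidence separate
# training records (members) from unseen records (non-members)?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# An intentionally overfit target model, the worst case for disclosure risk.
target = RandomForestClassifier(n_estimators=200, min_samples_leaf=1, random_state=0)
target.fit(X_train, y_train)

# Attack feature: confidence assigned to the true class, for members vs non-members.
conf_members = target.predict_proba(X_train)[np.arange(len(y_train)), y_train]
conf_nonmembers = target.predict_proba(X_test)[np.arange(len(y_test)), y_test]
membership = np.r_[np.ones(len(conf_members)), np.zeros(len(conf_nonmembers))]
attack_auc = roc_auc_score(membership, np.r_[conf_members, conf_nonmembers])

# AUC near 0.5 suggests low empirical disclosure risk; well above 0.5 is a red flag.
print(f"membership-inference AUC: {attack_auc:.2f}")
```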
Humans can use physical interaction to teach robot arms. When a human kinesthetically guides a robot through a demonstration, the robot learns the desired task. While prior work has focused on how the robot learns, it is equally important for the human teacher to understand what their robot is learning. Visual displays can convey this information; however, we hypothesize that visual feedback alone misses the physical connection between human and robot. In this paper we introduce a novel class of soft haptic displays that wrap around the robot arm, adding signals without affecting the interaction. We first design a pneumatically actuated array that remains flexible for mounting. We then develop single- and multi-dimensional versions of this wrapped haptic display and explore human perception of the rendered signals in psychophysical tests and during robot learning. We ultimately find that people accurately distinguish single-dimensional feedback with an 11.4% Weber fraction and identify multi-dimensional feedback with 94.5% accuracy. When physically teaching robot arms, humans leverage the single-dimensional feedback to provide better demonstrations than with visual feedback: our wrapped haptic display decreases teaching time while increasing demonstration quality. This improvement depends on the location and distribution of the wrapped haptic display. Videos of our device and experiments are available here: https://youtu.be/ypcmgeqsjdm
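A short illustration of what the reported 11.4% Weber fraction means; the reference pressure in the example is an assumed value for illustration, not a figure from the paper.

```latex
% Weber's law: the just-noticeable change \Delta I is proportional to the
% reference intensity I, with Weber fraction k.
\[
  k = \frac{\Delta I}{I} \approx 0.114
  \qquad\Longrightarrow\qquad
  \Delta I \approx 0.114\, I .
\]
% Example (assumed numbers): at a reference inflation pressure of 10 kPa,
% a change of roughly 1.14 kPa would be just noticeable.
```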
Processing and analyzing tabular data in a productive and efficient way is essential for building successful applications in fields such as healthcare. However, the lack of a unified framework for representing and standardizing tabular information poses a significant challenge to researchers and practitioners alike. In this work, we present TabText, a methodology that leverages the unstructured data format of language to efficiently and accurately encode tabular data from different table structures and time periods. Using two healthcare datasets and four prediction tasks, we show that features extracted with TabText outperform those extracted by traditional processing methods by 2-5%. Furthermore, we analyze the sensitivity of our framework to different choices of missing-value handling, meta information, and language descriptive sentence representations, and provide insights into strategies that improve performance.
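A toy sketch of the core serialization idea, turning one tabular record into a descriptive sentence that a language model could then embed; the column names, template, and missing-value wording are illustrative choices, not the authors' exact scheme.

```python
# Serialize a tabular record into a natural-language sentence for a language model.
from typing import Any

def row_to_text(row: dict[str, Any], missing_token: str = "unknown") -> str:
    parts = []
    for column, value in row.items():
        value_text = missing_token if value is None else str(value)
        parts.append(f"{column.replace('_', ' ')} is {value_text}")
    return "The patient's " + "; ".join(parts) + "."

record = {"age": 67, "heart_rate": 92, "temperature_c": None, "admission_type": "emergency"}
print(row_to_text(record))
# -> "The patient's age is 67; heart rate is 92; temperature c is unknown; ..."
```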
To develop an automated workflow for rectal cancer three-dimensional conformal radiotherapy planning, we combined deep learning (DL) aperture predictions with forward-planning algorithms. We designed an algorithm to automate the clinical workflow for planning with field-in-field. DL models were trained, validated, and tested on 555 patients to automatically generate aperture shapes for the primary and boost fields. The network inputs were the digitally reconstructed radiograph, gross tumor volume (GTV), and nodal GTV. A physician scored each aperture for 20 patients on a 5-point scale (>3 acceptable). A planning algorithm was then developed to create a homogeneous dose using a combination of wedges and subfields. The algorithm iteratively identifies hotspot volumes, creates subfields, and optimizes beam weights without user intervention. It was tested on 20 patients using clinical apertures with different settings, and the resulting plans (4 plans/patient) were scored by a physician. The end-to-end workflow, using DL-generated apertures and the planning algorithm, was tested and scored by a physician on 39 patients. The predicted apertures had Dice scores of 0.95, 0.94, and 0.90 for the posterior, lateral, and boost fields, respectively. 100%, 95%, and 87.5% of the posterior, lateral, and boost apertures, respectively, were clinically acceptable. Wedged and non-wedged plans were clinically acceptable for 85% and 50% of patients, respectively. The hotspot dose in the final plans was reduced from 121% ($\pm$14%) to 109% ($\pm$5%) of the prescription dose. The combined end-to-end workflow of automatically generated apertures and optimized field-in-field plans produced acceptable plans for 38/39 (97%) of patients. We have successfully automated the clinical workflow to generate radiotherapy plans for our institution.
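A schematic sketch of the iterative hotspot/subfield loop described above, using a toy one-dimensional dose profile and an arbitrary weighting rule rather than the clinical algorithm:

```python
# Iteratively find the current hotspot, add a subfield that blocks it, and shift
# beam weight into the subfield until the maximum dose is near the prescription.
import numpy as np

prescription = 1.0
dose_open = prescription * (1.0 + 0.2 * np.exp(-np.linspace(-3, 3, 101) ** 2))  # hotspot in the middle

def make_subfield(dose: np.ndarray, threshold: float) -> np.ndarray:
    """Subfield fluence: block voxels currently above the hotspot threshold."""
    return np.where(dose > threshold, 0.0, 1.0)

fields = [np.ones_like(dose_open)]          # open field
weights = [1.0]
for _ in range(5):                          # a few iterations without user input
    total = sum(w * f for w, f in zip(weights, fields)) * dose_open
    if total.max() <= 1.10 * prescription:  # e.g. stop once the hotspot is near 110% of Rx
        break
    fields.append(make_subfield(total, 1.05 * prescription))
    # Shift a little weight from the open field into the new subfield.
    weights[0] -= 0.05
    weights.append(0.05)

final = sum(w * f for w, f in zip(weights, fields)) * dose_open
print(f"max dose: {100 * final.max():.1f}% of Rx")
```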